39 research outputs found

    Continuous Time Identification in Laplace Domain

    We give a simple and accurate method for estimating the parameters of continuous-time systems under the constraint that all the poles of the system lie to the left of the line s = -1. The method relies on the simple solution of a linear system of equations in the complex domain. We demonstrate through simulation that the proposed method gives accurate estimates when compared to existing methods. Methods for obtaining sparse solutions, which help in determining the order of the system, are also given.
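    The abstract does not spell out the linear system, but one standard way to pose continuous-time identification as a linear solve in the complex domain is Levy's linearized least squares over frequency samples. The sketch below is an illustrative assumption (function name, sample points, and setup are not from the paper): it fits a transfer function with a monic denominator by solving one real linear least-squares problem.

```python
import numpy as np

def fit_ct_tf(s_pts, G_vals, nb, na):
    """Linearized least-squares fit of G(s) = N(s)/D(s) from complex-domain
    samples G_vals at the points s_pts (Levy's method; D is monic, degree na).

    Each sample contributes one linear equation in the nb+1 numerator and
    na denominator coefficients:  N(s_k) - G_k*(D(s_k) - s_k^na) = G_k*s_k^na.
    """
    rows, rhs = [], []
    for s, G in zip(s_pts, G_vals):
        num_part = [s**i for i in range(nb + 1)]      # numerator coefficients
        den_part = [-G * s**i for i in range(na)]     # denominator (monic) coeffs
        rows.append(num_part + den_part)
        rhs.append(G * s**na)
    A, b = np.array(rows), np.array(rhs)
    # Stack real and imaginary parts so the unknown coefficients come out real
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([b.real, b.imag])
    theta, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    num = theta[:nb + 1]                      # ascending powers of s
    den = np.append(theta[nb + 1:], 1.0)      # ascending powers, monic
    return num, den
```

    For example, samples of G(s) = 2/(s + 3) (pole at -3, i.e. left of s = -1) are recovered exactly from noise-free frequency-response data.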

    Fast Algorithm Development for SVD: Applications in Pattern Matching and Fault Diagnosis

    The project aims at fast detection and diagnosis of faults occurring in process plants by designing a low-cost FPGA module for the computation. Fast detection and diagnosis while the process is still operating in a controllable region helps avoid further advancement of the fault and reduces the productivity loss. Model-based methods are not popular in the domain of process control, as obtaining an accurate model is expensive and requires expertise. Data-driven methods like Principal Component Analysis (PCA) are quite popular diagnostic methods for process plants, as they do not require any model. PCA is a widely used tool for dimensionality reduction, and thus for reducing the computational effort. The trends are captured in principal components, as it is difficult to have the same amount of disturbance as simulated in the historical database. The historical database has multiple instances of various kinds of faults and disturbances along with normal operation. A moving window approach has been employed to detect similar instances in the historical database based on the Standard PCA similarity factor. The measurements of the variables of interest over a certain period of time form the snapshot dataset, S. At each instant, a window of the same size as the snapshot dataset, picked from the historical database, forms the historical window, H. The two datasets are then compared using similarity factors like the Standard PCA similarity factor, which signifies the angular difference between the principal components of the two datasets. Since many of the operating conditions are quite similar to each other and a significant number of misclassifications have been observed, a candidate pool is formed which orders the historical data windows on the values of the similarity factor. Based on the most frequently detected operation among the top-most windows, the operating personnel take the necessary action. The Tennessee Eastman Challenge process has been chosen as an initial case study for evaluating the performance.
The measurements are sampled every one minute, and the fault having the smallest maximum duration is 8 hours. Hence the snapshot window size, m, has been chosen to consist of 500 samples, i.e. 8.33 hours of the most recent data for all 52 variables. Ideally, the moving window should replace the oldest sample with a new one; it would then take approximately as many comparisons as the size of the historical database. The size of the historical database is 4.32 million measurements (the past 8 years of data) for each of the 52 variables. With a software simulation in Matlab, it takes around 80-100 minutes to sweep through the whole 4.32-million-sample historical database. Since most of the computation is spent finding the principal components of the two datasets using SVD, a hardware design has to be incorporated to accelerate the pattern matching approach. The thesis is organized as follows: Chapter 1 describes the moving window approach and the various similarity factors and metrics used for pattern matching. The previous work proposed by Ashish Singhal is based on skipping a few samples to reduce the computational effort, and also employs windows as large as 5761 samples, which is four days of snapshot. Instead, a new method which skips samples when the similarity factor is quite low has been proposed. A simplified form of the Standard PCA similarity factor has been proposed without any trade-off in accuracy. Pre-computation over the historical database can also be done, as the data is available a priori, but this imposes a large memory requirement, and most of the time is then spent in read/write operations. The large memory requirement is due to the fact that every sample gives rise to a 52×35 matrix, assuming the top 35 PCs are sufficient to capture the variance of the dataset. Chapter 2 describes various popular algorithms for SVD. Algorithms apart from Jacobi methods, such as the Golub-Kahan and divide-and-conquer SVD algorithms, are briefly discussed.
While bi-diagonalization methods are very accurate, they suffer from large latency and are computationally intensive. On the other hand, Jacobi methods are computationally inexpensive and parallelizable, thus reducing the latency. We also evaluated the performance of the proposed hybrid Golub-Kahan Jacobi algorithm on our application. Chapter 3 describes the basic building block, CORDIC, which is used for performing the rotations required for Jacobi methods or for the n-D Householder reflections of Golub-Kahan SVD. CORDIC is widely employed in hardware design for computing trigonometric, exponential or logarithmic functions, as it makes use of simple shift and add/subtract operations. Two modes of CORDIC, namely Rotation mode and Vectoring mode, are discussed, which are used in the derivation of the Two-sided Jacobi SVD. Chapter 4 describes the Jacobi methods of SVD, which are quite popular in hardware implementation as they are quite amenable to parallel computation. Two variants of Jacobi methods, namely the One-sided and Two-sided Jacobi methods, are briefly discussed. The Two-sided Jacobi method making use of CORDIC has been derived. The systolic array implementation, which has been quite popular in hardware implementations for the past three decades, is discussed. Chapter 5 deals with the hardware implementation of pattern matching and reports a literature survey of the various architectures developed for computing SVD. The Xilinx ZC7020 has been chosen as the target device for the FPGA implementation, as it is an inexpensive device with many built-in peripherals. The latency reports with both Vivado HLS and Vivado SDSoC are also given for the application of interest.
Evaluation of other case studies and other data-driven methods similar to PCA, like Correspondence Analysis (CA) and Independent Component Analysis (ICA), the development of an efficient hybrid method for computing SVD in hardware and of a highly discriminating similarity factor, and the extension of CORDIC to n dimensions for Householder reflections have been considered for future research.
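    The Standard PCA similarity factor used in the moving-window comparison above can be sketched compactly: it is the mean squared cosine of the angles between the two k-dimensional principal subspaces (Krzanowski's S_PCA), which reduces to a small trace computation on the loading matrices. The snippet below is a minimal illustration, not the thesis's simplified variant.

```python
import numpy as np

def pca_loadings(X, k):
    """Top-k principal directions of a (samples x variables) window.
    Columns are mean-centered before the SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                       # variables x k, orthonormal columns

def spca(U1, U2):
    """Standard PCA similarity factor between two windows:
    S_PCA = trace(U1' U2 U2' U1) / k, i.e. the average squared cosine
    of the principal angles between the two subspaces (1 = identical)."""
    k = U1.shape[1]
    return np.trace(U1.T @ U2 @ U2.T @ U1) / k
```

    In the moving-window scheme, `spca(pca_loadings(S, k), pca_loadings(H, k))` is computed for each historical window H against the snapshot S, and windows are ranked by this score to form the candidate pool.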

    Fault Tolerant Power Balancing Strategy in an Isolated Microgrid via Optimization

    The increasing penetration of renewable energy generation (REG) in the microgrid paradigm has brought with it larger uncertainty in the scheduled generation. This, along with the inevitable variation between actual load and forecasted load, has further accentuated the issue of real-time power balancing. With the advent of smart loads and meters supported by advanced communication technologies, several new possibilities for demand side management have opened up. In this paper, a real-time optimization strategy for a load side energy management system (EMS) and for power balance is proposed. The proposed strategy achieves power balance by optimizing load reduction. The objective is to ascertain uninterrupted power to critical loads and reduce non-critical loads depending on the priorities of the various loads. To further enhance the flexibility of the system, the addition of a battery to the management model is also discussed. The proposed algorithm also makes the system tolerant to possible generator failures if a battery is added to the system. The effectiveness of the proposed online power balancing strategy via optimization is demonstrated through various simulation case studies.
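    The priority-based load reduction idea can be illustrated with a toy greedy rule: critical loads are always served, and non-critical loads are curtailed starting from the lowest priority until demand fits the available generation. This is only a sketch of the concept; the paper formulates the problem as a real-time optimization, and the data layout below is a hypothetical assumption.

```python
def shed_loads(gen, critical, noncritical):
    """Greedy illustration of priority-based power balancing.

    gen         -- available generation (kW)
    critical    -- list of critical load demands, always served in full
    noncritical -- list of (priority, demand) pairs; a higher priority
                   number means the load is curtailed later
    Returns a dict mapping priority -> power actually served.
    """
    avail = gen - sum(critical)
    assert avail >= 0, "generation cannot even cover the critical loads"
    served = {}
    # Serve the most important non-critical loads first
    for prio, demand in sorted(noncritical, key=lambda t: -t[0]):
        take = min(demand, avail)
        served[prio] = take
        avail -= take
    return served
```

    With 10 kW of generation, a 4 kW critical load, and non-critical loads of 3 kW (priority 1) and 5 kW (priority 2), the priority-2 load is served fully and the priority-1 load is curtailed to the 1 kW that remains.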

    Fault Diagnosis In Batch Process Monitoring

    Every process plant nowadays is highly complex, in order to produce high-quality products and satisfy demands in time. Beyond that, plant safety is a crucial aspect that has to be taken care of to increase plant efficiency. Poor monitoring strategies lead to huge losses of income and of the valuable time needed to regain normal behavior. So, when any fault occurs in the plant, it should be detected, and supervisory action should be taken before it propagates to new locations; further equipment failures can lead to a plant halt. Therefore process monitoring is a crucial task that has to be done effectively. Chapter 1 discusses the importance of fault detection and diagnosis (FDD) in plant monitoring, the typical situations that lead to faults, and their causes. How data is transformed in the different stages of a diagnostic system before action is taken, and the desirable characteristics of good diagnostic systems, are discussed briefly. The final part of this chapter discusses the basic classifications of FDD methods. Principal component analysis (PCA) is a multivariate statistical technique that helps extract the major information with few dimensions. The dimensionality of the reduced space is very low compared to the original dimension of the data set. The number of principal components (PCs) selected depends on the variability or information required in the lower-dimensional space, so PCA is an effective dimensionality reduction technique. But for process monitoring, both the PC space and the residual space are important. Chapter 2 mainly discusses PCA and its theory. Batch processes are relatively harder to monitor than continuous processes because of their dynamic nature and the non-linearity in the data. Methods such as MPCA (multi-way principal component analysis), MCA (multi-way correspondence analysis), kernel PCA, and dissimilarity-index-based (DISSIM) methods exist to monitor batch processes. Kernel-based methods need the right kernel to be chosen based on the non-linearity in the data.
    Dissimilarity-index-based methods suit continuous process monitoring well, since they can detect changes in the distribution of the data. The extension of the DISSIM method to batch process monitoring is EDISSIM, which is discussed in Chapter 3. MPCA is a very traditional method that can detect abnormal samples, but it cannot detect small mean deviations in the measurements. Multi-way PCA is applied after unfolding the data; batch data unfolding is discussed in Section 3.2 and the selection of control limits in Section 3.2.3. Apart from these methods, there is another strategy, called pattern matching, introduced by Johannesmeyer. This method helps to quickly locate similar patterns in a historical database. In process industries data is collected frequently, so a lot of data is available, but it contains comparatively little information; PCA is used to extract the main information. In the pattern matching strategy, to detect similar patterns in the historical database we need a quantitative measure of similarity between two data sets: the similarity factors. Using PCA, the highly informative part of the data is extracted in a lower-dimensional space, and the similarity factors are calculated from the PCA models. The different similarity factors and their calculation are shown in Chapter 4. On-line monitoring of the acetone-butanol batch process is discussed using the pattern matching strategy. The mathematical model of the acetone-butanol fermentation process is simulated at different nominal values with different operating conditions to develop a historical database. In this case study there are 500 batches with five operating conditions: one NOC and four different faulty operation conditions, with 100 batches per operating condition. After calculation of the similarity factors, instead of going for candidate pool selection directly, we try to detect the batches that are similar to the snapshot data.
    The performance of on-line monitoring using the pattern matching strategy is discussed. The on-line monitoring strategy changes the way the unfilled data is anticipated: here it is filled with reference batch data, where the reference is the average of the NOC batches. The performance of this method is verified in MATLAB, as shown in Section 4.3. Chapter 5 describes the average PC (principal component) model. This method helps decrease the effort spent in candidate pool selection and evaluation when searching for the snapshot data in the historical database. Incremental average model building and model updating also ultimately improve the quality of the model: in incremental average model building, if a snapshot data set is classified as one of the already existing operating-condition data sets, it is used in building the average model; if it does not match any existing operating condition, it is used to update the average model. This method is applied to the acetone-butanol fermentation process data and verified. Because batch data is highly non-linear in nature, PCA is not able to handle the non-linear correlations, and the pattern matching approach using the PCA average model does not give good discrimination. Better discrimination ability and self-aggregation are possible using correspondence analysis because of its non-linear scaling. Chapter 6 briefly discusses the pattern matching approach using correspondence analysis. Results obtained using the CA-based similarity factor are displayed for the acetone-butanol fermentation process case study.
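    The batch-wise unfolding step that precedes MPCA is mechanically simple: a batches x variables x time array is flattened so that each row holds one complete batch, and each column (a variable/time pair) is then mean-centered and autoscaled across batches. The sketch below shows the conventional preprocessing, not necessarily the exact variant used in the thesis.

```python
import numpy as np

def batchwise_unfold(X):
    """Unfold a batches x variables x time array (I x J x K) into the
    I x (J*K) matrix used by MPCA: each row is one complete batch."""
    I, J, K = X.shape
    return X.reshape(I, J * K)

def scale_unfolded(Xu):
    """Mean-center and autoscale every column (each variable/time pair)
    across batches -- the usual preprocessing before fitting MPCA."""
    mu = Xu.mean(axis=0)
    sd = Xu.std(axis=0, ddof=1)
    sd[sd == 0] = 1.0                     # guard against constant columns
    return (Xu - mu) / sd
```

    After this step, ordinary PCA on the unfolded matrix is exactly MPCA, and abnormal batches show up as rows with large scores or residuals.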

    Nearest Neighbour Based Algorithm for Data Reduction and Fault Diagnosis

    Dimensionality reduction is one of the prime concerns when analyzing process historical data for plant-wide monitoring, because it can significantly reduce the computational load during statistical model building. Most research has been concerned with reducing the dimension along the variable space, i.e. reducing the number of columns; however, no efforts have been made to reduce dimensions along the sample (row) space. In this paper, an algorithm based on nearest neighbors is presented that exploits the principle of distributional equivalence (PDE) property of the correspondence analysis (CA) algorithm to achieve data reduction along the sample space without significantly affecting the diagnostic performance. The data reduction algorithm presented here is unsupervised and can achieve significant data reduction when used in conjunction with CA. The data reduction ability of the proposed methodology is demonstrated using the benchmark Tennessee Eastman process simulation case study.
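    The PDE property says that merging rows whose profiles (rows normalized to sum one) are proportional leaves the CA solution unchanged, which is what makes sample-space reduction safe. The toy sketch below merges consecutive near-duplicate rows by summation; it is a hypothetical illustration of the principle, not the paper's nearest-neighbor algorithm.

```python
import numpy as np

def reduce_rows_pde(X, tol=1e-3):
    """Greedy sample-space reduction sketch exploiting the principle of
    distributional equivalence: a row whose profile (row / row-sum) is
    nearly identical to the previously kept row is merged into it by
    summation, which under PDE leaves the CA solution unchanged."""
    rows = [X[0].astype(float)]
    for x in X[1:]:
        p = x / x.sum()
        q = rows[-1] / rows[-1].sum()
        if np.abs(p - q).sum() < tol:     # profiles ~ proportional: merge
            rows[-1] = rows[-1] + x
        else:
            rows.append(x.astype(float))
    return np.array(rows)
```

    For process data, consecutive samples from the same steady operating condition have nearly proportional profiles, so long runs collapse into single aggregated rows before CA is fit.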

    Optimal autonomous microgrid operation: A holistic view

    The prospects of incorporating a consumer side load-scheduling algorithm that works in conjunction with the unit commitment problem, which in turn coordinates with a real-time load balancer, are discussed in this paper. An integrated framework for an autonomous microgrid with the objectives of increasing stability, reliability and economy is proposed. From the microgrid operator's point of view, load side scheduling helps reduce the stress on the system, especially during peak hours, thereby ensuring system stability and security. From the consumers' point of view, the dynamic electricity prices within a day, which are a reflection of this time-varying stress on the system, encourage them to endorse such a scheme and reduce the bills they incur. The unit commitment problem is run a day in advance to determine generator outputs for the following day. Owing to unpredictable weather conditions, running the unit commitment problem in advance does not guarantee the planned real-time generation in the microgrid scenario. Such variability in forecasted generation must be handled in any microgrid, while accounting for load demand uncertainties. To address this issue, a load side energy management system and power balance scheme is proposed in this paper. The objective is to ascertain uninterrupted power to critical loads while managing other non-critical loads based on their priorities.

    Detuning Iterative Continuous Cycling based Multi-loop PI control for multivariable processes

    Encountering multivariable systems in process industries is quite common. Along with effectiveness and robustness, simplicity and easy scalability are the utmost requirements expected in a control system design. In this regard, we propose the Detuning Iterative Continuous Cycling (DICC) method for decentralized PI control of multi-input multi-output (MIMO) processes. The proposed DICC design utilizes the idea of continuous cycling for obtaining the ultimate parameters of the effective open-loop transfer functions (EOTFs). While for lower-dimensional systems the controller settings are easily derived from the EOTFs, controller tuning for higher-dimensional systems is challenging due to complicated EOTF dynamics. Therefore, the effective transfer function (ETF) description of the large-scale MIMO system is used for obtaining the ultimate parameters during the closed-loop continuous cycling test. Thereafter, to obtain the multi-loop PI controller settings, the derived ultimate parameters for the EOTFs/ETFs are subjected to appropriate detuning adjustments. The wide applicability, effectiveness, simplicity and easy scalability of the proposed DICC method have been demonstrated by considering various 2×2, 3×3 and 4×4 dimensional MIMO systems. Further, the robustness of the proposed design has also been tested by introducing a plant-model mismatch of ±10% during the closed-loop simulations.
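    The "ultimate parameters, then detune" pipeline can be made concrete with the classical Ziegler-Nichols continuous-cycling PI rule followed by a detuning factor. The specific detuning adjustment below (divide the gain by F, multiply the integral time by F) is a common textbook choice used here purely for illustration, not necessarily the DICC paper's exact rule.

```python
def detuned_pi(Ku, Pu, F=2.0):
    """PI settings from the ultimate gain Ku and ultimate period Pu of a
    continuous-cycling test, then detuned by factor F (larger F gives a
    gentler, more robust loop).

    Returns (Kc, Ti): proportional gain and integral time.
    """
    Kc = 0.45 * Ku          # Ziegler-Nichols continuous-cycling PI gain
    Ti = Pu / 1.2           # Ziegler-Nichols integral time
    return Kc / F, Ti * F   # illustrative detuning adjustment
```

    In a multi-loop setting, one such detuned pair would be computed per loop from that loop's EOTF/ETF ultimate parameters, with F chosen to trade performance against interaction.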

    Controlled power point tracking for autonomous operation of PMSG based wind energy conversion system

    With the continuous depletion of conventional sources of energy, Wind Energy Conversion Systems (WECSs) are turning out to be one of the major players with immense potential to meet future energy demands. WECSs are quickly becoming a primary source of energy in coastal regions and islands, where they operate in autonomous mode. In this paper, a control strategy for controlled power extraction from a WECS operating in islanded mode is presented. The proposed control strategy enables limited as well as maximum power extraction from WECSs with the desired load voltage profile, while minimizing the installation as well as the operating costs associated with the use of expensive batteries in the system. The motive behind using batteries in the system is to facilitate transient stability and enhance reliability. As opposed to pitch angle control, in the present work, real power control is attained by field-oriented control (FOC) of the permanent magnet synchronous generator (PMSG). The operating point of the WECS is decided based on the wind turbine characteristics and the demanded power. Proper decoupling and feed-forward techniques have been deployed to eliminate cross-coupling and mitigate the effect of load side disturbances. Simulations are carried out under varying load demand as well as changing weather conditions to demonstrate the applicability and effectiveness of the proposed control strategy.

    Generalized framework for decoupler design

    Along with the benefit of suppressing the interaction effects in a multivariable system, decoupling control brings in several other challenges and difficulties. For TITO processes, decoupling control remains simple and has been demonstrated in earlier works. But as the dimensionality of the system increases, the complexity of the decoupling control blows up differently for the various decoupling schemes. Therefore, the TITO decoupling framework cannot simply be extended to higher-dimensional processes for all decoupling control design methodologies. In this paper, we present a generalized framework for decoupler design for an n-dimensional multivariable system. This work also highlights the challenges and difficulties involved in decoupler design for higher-dimensional systems. Simulations have been carried out to verify how well the decoupling schemes work in suppressing the interaction effects. Further, the robustness of the decoupling schemes against plant-model mismatch has been tested by evaluating various performance indices.
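    A minimal static version of ideal decoupling illustrates the n-dimensional idea: for a steady-state gain matrix G0, the decoupler D = G0^{-1} diag(G0) makes the compensated plant G0·D diagonal, so each loop sees only its own gain. This is a textbook sketch under a static (zero-frequency) assumption, not the paper's generalized framework; dynamic decoupling replaces the gains with transfer functions and is where the dimensional complexity arises.

```python
import numpy as np

def ideal_decoupler_ss(G0):
    """Steady-state ideal decoupler for an n x n gain matrix G0:
    D = G0^{-1} diag(G0), so that G0 @ D = diag(G0) and the interaction
    terms vanish at steady state. Requires G0 to be nonsingular."""
    return np.linalg.solve(G0, np.diag(np.diag(G0)))
```

    For a 2x2 example with gains [[2, 1], [0.5, 3]], the compensated plant G0 @ D is exactly diag(2, 3), i.e. each controller acts on a decoupled single-loop gain.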